Large Language Models in Analyzing Crash Narratives -- A Comparative Study of ChatGPT, BARD and GPT-4

Mumtarin, Maroa, Chowdhury, Md Samiullah, Wood, Jonathan

arXiv.org Artificial Intelligence

In traffic safety research, extracting information from crash narratives using text analysis is a common practice. With recent advancements in large language models (LLMs), it would be useful to know how the popular LLM interfaces perform in classifying or extracting information from crash narratives. To explore this, our study used the three most popular publicly available LLM interfaces: ChatGPT, BARD and GPT-4. This study investigated their usefulness and boundaries in extracting information and answering queries related to accidents from 100 crash narratives from Iowa and Kansas. During the investigation, their capabilities and limitations were assessed and their responses to the queries were compared. Five questions were asked related to the narratives: 1) Who is at fault? 2) What is the manner of collision? 3) Did the crash occur in a work zone? 4) Did the crash involve pedestrians? and 5) What is the sequence of harmful events in the crash? For questions 1 through 4, the overall similarities among the LLMs were 70%, 35%, 96% and 89%, respectively. The similarities were higher for direct questions requiring binary responses and significantly lower for complex questions. To compare the responses to question 5, network diagrams and centrality measures were analyzed. The network diagrams from the three LLMs were not always similar, although they sometimes shared the same influential events with high in-degree, out-degree and betweenness centrality. This study suggests using multiple models to extract viable information from narratives. Also, caution must be exercised when using these interfaces to obtain crucial safety-related information.
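The sequence-of-events comparison in this abstract rests on directed-graph centrality. A minimal sketch of those measures on a hypothetical chain of harmful events (the event names are invented for illustration, not taken from the study), using networkx:

```python
import networkx as nx

# Hypothetical sequence-of-events digraph for one crash narrative:
# each edge points from a harmful event to the event it precedes.
G = nx.DiGraph()
G.add_edges_from([
    ("ran off road", "struck ditch"),
    ("struck ditch", "overturn"),
    ("ran off road", "struck tree"),
])

in_deg = dict(G.in_degree())
out_deg = dict(G.out_degree())
betweenness = nx.betweenness_centrality(G)

# "struck ditch" lies on the path from "ran off road" to "overturn",
# so it is the only node with nonzero betweenness here.
print(in_deg["struck ditch"], out_deg["ran off road"])  # 1 2
print(betweenness["struck ditch"] > 0)                  # True
```

Comparing such centrality rankings across the three LLMs' event graphs is one way to quantify how much their answers to question 5 agree.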


Active Inference in String Diagrams: A Categorical Account of Predictive Processing and Free Energy

Tull, Sean, Kleiner, Johannes, Smithe, Toby St Clere

arXiv.org Artificial Intelligence

We present a categorical formulation of the cognitive frameworks of Predictive Processing and Active Inference, expressed in terms of string diagrams interpreted in a monoidal category with copying and discarding. This includes diagrammatic accounts of generative models, Bayesian updating, perception, planning, active inference, and free energy. In particular we present a diagrammatic derivation of the formula for active inference via free energy minimisation, and establish a compositionality property for free energy, allowing free energy to be applied at all levels of an agent's generative model. Aside from aiming to provide a helpful graphical language for those familiar with active inference, we conversely hope that this article may provide a concise formulation and introduction to the framework.
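The free energy discussed in this abstract can be computed directly for a tiny discrete generative model. A minimal numeric sketch (all probabilities are invented for illustration) showing that the variational free energy upper-bounds the negative log evidence, with equality only when q is the exact posterior:

```python
import math

# Hypothetical discrete generative model p(o, s) = p(o | s) p(s)
p_s = {"awake": 0.7, "asleep": 0.3}
p_o_given_s = {"awake": {"noise": 0.8, "quiet": 0.2},
               "asleep": {"noise": 0.1, "quiet": 0.9}}

o = "noise"  # observed outcome

# Approximate posterior q(s) -- an arbitrary guess, not the true posterior
q = {"awake": 0.9, "asleep": 0.1}

# Variational free energy F = sum_s q(s) [log q(s) - log p(o, s)]
F = sum(q[s] * (math.log(q[s]) - math.log(p_o_given_s[s][o] * p_s[s]))
        for s in q)

# F upper-bounds -log p(o); the gap is KL(q || posterior)
log_evidence = math.log(sum(p_o_given_s[s][o] * p_s[s] for s in p_s))
print(F >= -log_evidence)  # True
```

Minimising F over q is exactly the perception step of active inference; the paper's contribution is deriving this diagrammatically rather than numerically.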


Causal models in string diagrams

Lorenz, Robin, Tull, Sean

arXiv.org Artificial Intelligence

The framework of causal models provides a principled approach to causal reasoning, applied today across many scientific domains. Here we present this framework in the language of string diagrams, interpreted formally using category theory. A class of string diagrams, called network diagrams, are in 1-to-1 correspondence with directed acyclic graphs. A causal model is given by such a diagram with its components interpreted as stochastic maps, functions, or general channels in a symmetric monoidal category with a 'copy-discard' structure (cd-category), turning a model into a single mathematical object that can be reasoned with intuitively and yet rigorously. Building on prior works by Fong and Jacobs, Kissinger and Zanasi, as well as Fritz and Klingler, we present diagrammatic definitions of causal models and functional causal models in a cd-category, generalising causal Bayesian networks and structural causal models, respectively. We formalise general interventions on a model, including but beyond do-interventions, and present the natural notion of an open causal model with inputs. We also give an approach to conditioning based on a normalisation box, allowing for causal inference calculations to be done fully diagrammatically. We define counterfactuals in this setup, and treat the problems of the identifiability of causal effects and counterfactuals fully diagrammatically. The benefits of such a presentation of causal models lie in foundational questions in causal reasoning and in their clarificatory role and pedagogical value. This work aims to be accessible to different communities, from causal model practitioners to researchers in applied category theory, and discusses many examples from the literature for illustration. Overall, we argue and demonstrate that causal reasoning according to the causal model framework is most naturally and intuitively done as diagrammatic reasoning.
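As a toy numeric counterpart to the interventions formalised in this abstract, here is a minimal do-intervention on a hypothetical two-node causal Bayesian network (probabilities invented for illustration). It sketches only the standard "cut the mechanism and clamp the variable" semantics, not the paper's string-diagram machinery:

```python
# Hypothetical network S -> C (smoking -> cough)
p_s = {True: 0.3, False: 0.7}          # mechanism for S
p_c_given_s = {True: 0.8, False: 0.1}  # mechanism for C given S

# Observational P(C = true), marginalising over S
p_c_obs = sum(p_s[s] * p_c_given_s[s] for s in p_s)

# do(S = true): discard S's own mechanism, clamp S, keep P(C | S)
p_c_do = p_c_given_s[True]

print(round(p_c_obs, 2), p_c_do)  # 0.31 0.8
```

In the cd-category picture, deleting the S mechanism and plugging in a fixed state is one diagram rewrite; the numbers above are what that rewrite computes.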


Structuralist analysis for neural network system diagrams

Marshall, Guy Clarke, Jay, Caroline, Freitas, Andre

arXiv.org Artificial Intelligence

This short paper examines diagrams describing neural network systems in academic conference proceedings. Many aspects of scholarly communication are controlled, particularly in relation to text and formatting, but diagrams are often not centrally curated beyond peer review. Using a corpus-based approach, we argue that the heterogeneous diagrammatic notations used for neural network systems have implications for signification in this domain. We divide this into (i) what content is being represented and (ii) how relations are encoded. Applying a novel structuralist framework, we conduct a corpus analysis to quantitatively cluster diagrams according to the authors' representational choices. This quantitative diagram classification in a heterogeneous domain may provide a foundation for further analysis.
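The clustering step described in this abstract can be sketched crudely: encode each diagram's representational choices as a feature vector and group matching profiles. The feature names and diagrams below are hypothetical, not drawn from the paper's corpus:

```python
# Toy sketch: each diagram is a binary profile of representational
# choices; identical profiles fall into the same cluster.
diagrams = {
    "fig_a": (1, 0, 1),  # (uses arrows, shows tensor shapes, has legend)
    "fig_b": (1, 0, 1),
    "fig_c": (0, 1, 0),
}

clusters = {}
for name, features in diagrams.items():
    clusters.setdefault(features, []).append(name)

print(sorted(len(members) for members in clusters.values()))  # [1, 2]
```

A real analysis would use richer features and a proper distance-based clustering, but the encoding step is the same.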


Visualizing the results of a Market Basket Analysis in SAS Viya

#artificialintelligence

One of the most exciting features from the newest release of Visual Data Mining and Machine Learning on SAS Viya is the ability to perform Market Basket Analysis on large amounts of transactional data. Market Basket Analysis allows companies to analyze large transactional files to identify significant relationships between items. While most commonly used by retailers, this technique can be used by any company that has transactional data. For this example, we will be looking at customer supermarket purchases over the past month. Customer is the Transaction ID; Time is the time of purchase; and Product is the item purchased.
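Under the hood, Market Basket Analysis scores item co-occurrence with measures such as support and confidence. A minimal pure-Python sketch of that arithmetic on invented transactions (not SAS Viya itself; the Time column is omitted for brevity):

```python
from itertools import combinations
from collections import Counter

# Hypothetical baskets, one set per transaction ID (Customer)
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk"},
]

n = len(transactions)
pair_counts = Counter()
item_counts = Counter()
for basket in transactions:
    item_counts.update(basket)
    pair_counts.update(combinations(sorted(basket), 2))

# Support and confidence for the rule {bread} -> {butter}
support = pair_counts[("bread", "butter")] / n                        # 2/4
confidence = pair_counts[("bread", "butter")] / item_counts["bread"]  # 2/3
print(round(support, 2), round(confidence, 2))  # 0.5 0.67
```

Tools like SAS Viya compute the same measures at scale and rank the resulting rules so analysts can focus on the strongest relationships.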


The amazing predictive power of conditional probability in Bayes Nets

@machinelearnbot

Using conditional probability gives Bayes Nets strong analytical advantages over traditional regression-based models. This adds to several advantages we discussed in an earlier article. But what is conditional probability, and what makes it different? In short, conditional probability means that the effects of one variable depend on, or flow from, the distribution of another variable (or others). The complete state of one variable determines how another acts.
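A small numeric illustration of the idea: the distribution of one variable is read off from the state of its parent, and marginals follow by the law of total probability. The variables and probabilities below are invented for illustration:

```python
# Hypothetical conditional probability table for Sprinkler -> WetGrass:
# the distribution of WetGrass depends entirely on the sprinkler's state.
p_wet_given_sprinkler = {True: 0.9, False: 0.2}

p_sprinkler = 0.3

# Marginal P(WetGrass) via the law of total probability
p_wet = (p_sprinkler * p_wet_given_sprinkler[True]
         + (1 - p_sprinkler) * p_wet_given_sprinkler[False])
print(round(p_wet, 2))  # 0.41
```

A Bayes Net is just many such tables chained along the network's edges, which is what lets evidence at one node propagate to the rest.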


HowNutsAreTheDutch: Personalized Feedback on a National Scale

Blaauw, Frank (University of Groningen) | Krieke, Lian van der (University of Groningen) | Bos, Elske (University of Groningen) | Emerencia, Ando (University of Groningen) | Jeronimus, Bertus F. (University of Groningen) | Schenk, Maria (University of Groningen) | Vos, Stijn de (University of Groningen) | Wanders, Rob (University of Groningen) | Wardenaar, Klaas (University of Groningen) | Wigman, Johanna T. W. (University of Groningen) | Aiello, Marco (University of Groningen) | Jonge, Peter de (University of Groningen)

AAAI Conferences

A paradigm shift is taking place in the field of mental healthcare and patient wellbeing. Traditionally, the attempts at sustaining and enhancing wellbeing were mainly based on the comparison of the individual with the population average. Recently, attention has shifted towards a more personal, idiographic approach. Such a shift calls for new solutions to get data about individuals, create personalized models of wellbeing and translate these into personalized advice. Idiographic research can be conducted on a large scale by letting people measure themselves. Repeated collection of data, for example by means of questionnaires, provides individuals feedback on and insight into their wellbeing. A way to partially automate this feedback process is by creating software that statistically analyzes, using a method known as vector autoregression, repetitive questionnaire data to determine cause-effect relationships between the measured features. In this paper we describe a means to facilitate these repetitive measurements and to partially automate the feedback process. The paper provides an overview and technical description of such automated analysis software, named Autovar, and its use in an online self-measurement platform.
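Vector autoregression, the method this abstract relies on, models each measurement as a linear function of the previous time step across all variables. A minimal numpy sketch (not Autovar itself) that fits a VAR(1) lag matrix by least squares on simulated diary data; the variables and coefficients are invented for illustration:

```python
import numpy as np

# Simulate a stable two-variable VAR(1): Y_t = A @ Y_{t-1} + noise
# (e.g. daily sleep and stress scores)
rng = np.random.default_rng(0)
A_true = np.array([[0.6, 0.2],
                   [-0.1, 0.5]])
y = np.zeros((500, 2))
for t in range(1, 500):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=2)

# Least-squares estimate of the lag matrix from the observed series
X, Y = y[:-1], y[1:]
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_hat = B.T  # so that Y_t ~ A_hat @ Y_{t-1}

print(A_hat.round(2))
```

The signs and sizes of the estimated lag coefficients are what a tool like Autovar turns into "cause-effect" feedback, e.g. whether yesterday's sleep predicts today's stress.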